# Natural Language Understanding
Searchmap Preview
MIT
A conversational embedding model optimized for e-commerce search. Fine-tuned from Stella Embed 400M v5, it excels at understanding natural-language queries and matching them to relevant products.
Text Embedding
Transformers Multilingual

VPLabs
25
3
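The retrieval pattern behind such an embedding model can be sketched with toy vectors: embed the query and every product once, then rank products by cosine similarity. The 4-dimensional vectors below are stand-ins for the model's real output, not actual Searchmap embeddings.

```python
import numpy as np

def cosine_sim(query_vec, product_mat):
    # Cosine similarity between one query vector and a matrix of product vectors.
    q = query_vec / np.linalg.norm(query_vec)
    p = product_mat / np.linalg.norm(product_mat, axis=1, keepdims=True)
    return p @ q

# Toy 4-dimensional "embeddings" standing in for the model's real output.
query = np.array([0.9, 0.1, 0.0, 0.2])
products = np.array([
    [0.8, 0.2, 0.1, 0.1],   # running shoes
    [0.0, 0.9, 0.1, 0.3],   # coffee maker
    [0.1, 0.1, 0.9, 0.2],   # desk lamp
])
scores = cosine_sim(query, products)
best = int(np.argmax(scores))  # index of the best-matching product
```

In production the embeddings would come from the model itself and the ranking would typically run in a vector index rather than a dense matrix product.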
Qwen 0.5B DPO 5epoch
MIT
A 0.5B-parameter Qwen language model fine-tuned with Direct Preference Optimization (DPO) for five epochs.
Large Language Model
Transformers

JayHyeon
25
1
Ecot Openvla 7b Oxe
A pretrained Transformer model for robotic control tasks, supporting basic functions such as motion planning and object grasping
Large Language Model
Transformers

Embodied-CoT
2,003
0
Pony Diffusion V6 XL
Pony Diffusion V6 is a versatile SDXL fine-tuned model capable of generating various visuals of furry, beast, or humanoid forms, supporting both SFW and NSFW content.
Image Generation
LyliaEngine
1,226
41
Mamba 1.4b Instruct Hf
An instruction-tuned variant of the 1.4B-parameter Mamba state-space language model, distributed in Hugging Face Transformers format.
Large Language Model
Transformers

scottsus
60
1
Sanskritayam Gpt
A GPT-style language model built with the Transformers library; little further documentation of its specific functions and uses is available.
Large Language Model
Transformers

thtskaran
17
1
Deberta V3 Large Boolq
MIT
A text classification model fine-tuned from microsoft/deberta-v3-large on the BoolQ dataset, designed for answering boolean (yes/no) questions.
Text Classification
Transformers

nfliu
32.15k
3
Instructblip Flan T5 Xl 8bit Nf4
MIT
InstructBLIP is a vision instruction tuning model based on BLIP-2, using Flan-T5-xl as the language model, capable of generating descriptions based on images and text instructions.
Image-to-Text
Transformers English

Mediocreatmybest
22
0
Lamini GPT 774M
A 774M-parameter language model based on the gpt2-large architecture, fine-tuned on 2.58 million instruction-tuning samples and suitable for natural-language instruction-following tasks.
Large Language Model
Transformers English

MBZUAI
862
13
Vilanocr
An Urdu language model built on the Transformers library, suitable for natural language processing tasks.
Large Language Model
Transformers Other

musadac
24
0
Bert Semantic Similarity
A BERT model fine-tuned on the SNLI corpus for predicting semantic similarity scores between two sentences.
Text Embedding
keras-io
22
9
Deberta Large
MIT
DeBERTa is an improved BERT model that enhances performance through a disentangled attention mechanism and an enhanced masked decoder, surpassing BERT and RoBERTa in multiple natural language understanding tasks.
Large Language Model
Transformers English

microsoft
15.07k
16
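Many of the DeBERTa entries in this list cite the disentangled attention mechanism. A simplified single-head numpy sketch of the idea, following the DeBERTa paper: each attention score is the sum of content-to-content, content-to-position, and position-to-content terms, scaled by sqrt(3d). All weights and inputs here are random placeholders, not real model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 4, 8                                  # sequence length, head dimension

# Content projections, as in standard attention...
Hc = rng.normal(size=(L, d))                 # content hidden states
Qc, Kc = Hc @ rng.normal(size=(d, d)), Hc @ rng.normal(size=(d, d))

# ...plus projections of relative-position embeddings, one row per offset.
P = rng.normal(size=(2 * L - 1, d))          # relative-position embedding table
Qr, Kr = P @ rng.normal(size=(d, d)), P @ rng.normal(size=(d, d))

def rel(i, j):
    # Map the offset i - j into an index of the relative-embedding table.
    return (i - j) + (L - 1)

# Disentangled score = content-to-content + content-to-position + position-to-content.
scores = np.empty((L, L))
for i in range(L):
    for j in range(L):
        c2c = Qc[i] @ Kc[j]
        c2p = Qc[i] @ Kr[rel(i, j)]
        p2c = Kc[j] @ Qr[rel(j, i)]
        scores[i, j] = (c2c + c2p + p2c) / np.sqrt(3 * d)

attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)      # softmax: each row sums to 1
```

Splitting content and position into separate projections is what lets the model weigh "what a token says" and "where it sits" independently, which the entries above credit for DeBERTa's gains over BERT and RoBERTa.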
Deberta V2 Xxlarge Mnli
MIT
DeBERTa V2 XXLarge is an enhanced BERT variant based on the disentangled attention mechanism; with 1.5 billion parameters, it surpasses RoBERTa and XLNet on natural language understanding tasks.
Large Language Model
Transformers English

microsoft
4,077
8
Chinese Xlnet Mid
Apache-2.0
A Chinese-oriented XLNet pretrained model aimed at enriching Chinese natural language processing resources and providing diverse Chinese pretrained model options.
Large Language Model
Transformers Chinese

hfl
120
9
Deberta Base
MIT
DeBERTa is an improved BERT model based on the disentangled attention mechanism and enhanced masked decoder, excelling in multiple natural language understanding tasks.
Large Language Model English
microsoft
298.78k
78
Deberta V2 Xxlarge
MIT
DeBERTa V2 XXLarge is an improved BERT model based on disentangled attention and enhanced mask decoding, with 1.5 billion parameters, surpassing BERT and RoBERTa performance on multiple natural language understanding tasks
Large Language Model
Transformers English

microsoft
9,179
33
Indobert Lite Large P1
MIT
IndoBERT is an advanced language model for Indonesian, based on the BERT architecture, trained using masked language modeling and next sentence prediction objectives.
Large Language Model
Transformers Other

indobenchmark
42
0
Deberta V2 Xlarge Mnli
MIT
DeBERTa V2 XLarge is an enhanced natural language understanding model developed by Microsoft, which improves the BERT architecture through a disentangled attention mechanism and enhanced masked decoder, outperforming BERT and RoBERTa on multiple NLU tasks.
Large Language Model
Transformers English

microsoft
51.59k
9
Deberta V2 Xlarge
MIT
DeBERTa V2 XLarge is an enhanced natural language understanding model developed by Microsoft, which improves the BERT architecture through a disentangled attention mechanism and enhanced masked decoder, achieving SOTA performance on multiple NLP tasks.
Large Language Model
Transformers English

microsoft
116.71k
22
Deberta Xlarge
MIT
DeBERTa improves upon BERT and RoBERTa models with a disentangled attention mechanism and enhanced masked decoder, demonstrating superior performance in most natural language understanding tasks.
Large Language Model
Transformers English

microsoft
312
2
Minilm L12 H384 Uncased
MIT
MiniLM is a compact and efficient pre-trained language model, compressed through deep self-attention distillation technology, suitable for language understanding and generation tasks.
Large Language Model
microsoft
10.19k
89
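The "deep self-attention distillation" behind MiniLM can be illustrated with a toy loss: the student is trained to minimize the KL divergence between its last-layer attention distributions and the teacher's. The logits below are random stand-ins for real attention scores, not values from the actual models.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn_distill_loss(teacher_scores, student_scores):
    # KL(teacher || student) over each query position's attention
    # distribution, averaged across positions -- the core signal in
    # deep self-attention distillation.
    t = softmax(teacher_scores)
    s = softmax(student_scores)
    kl = (t * (np.log(t) - np.log(s))).sum(axis=-1)
    return kl.mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 4))               # toy last-layer attention logits
loss_same = attn_distill_loss(teacher, teacher) # identical maps -> zero loss
loss_diff = attn_distill_loss(teacher, rng.normal(size=(4, 4)))
```

Because only the final layer's attention (and, in the full method, value relations) is matched, the student's depth and width can differ freely from the teacher's, which is how the 384-hidden, 12-layer model above stays compact.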
Deberta Large Mnli
MIT
DeBERTa is an improved BERT model based on the disentangled attention mechanism and enhanced masked decoder; this large variant is fine-tuned on the MNLI task and excels in natural language understanding.
Large Language Model
Transformers English

microsoft
1.4M
18
Deberta V3 Xsmall
MIT
DeBERTaV3 is an improved version of the DeBERTa model proposed by Microsoft, which enhances efficiency through ELECTRA-style gradient-disentangled embedding sharing pretraining method, demonstrating excellent performance in natural language understanding tasks.
Large Language Model
Transformers English

microsoft
87.40k
43
Deberta V2 Xlarge
MIT
DeBERTa is an enhanced BERT decoding model that, through its disentangled attention mechanism and enhanced masked decoder, surpasses BERT and RoBERTa on multiple natural language understanding tasks.
Large Language Model
Transformers English

kamalkraj
302
0
Deberta Base
MIT
DeBERTa is an enhanced BERT decoding model based on the disentangled attention mechanism, improving upon BERT and RoBERTa models, and excels in natural language understanding tasks.
Large Language Model
Transformers English

kamalkraj
287
0
Dialogpt Small Rick Sanchez
A small conversational model based on Microsoft's DialoGPT, fine-tuned to generate dialogue in the style of Rick Sanchez.
Large Language Model
Transformers

matprado
26
1
Deberta V3 Base
MIT
DeBERTaV3 is an improved pre-trained language model based on DeBERTa, which enhances efficiency through gradient-disentangled embedding sharing in ELECTRA-style pretraining and excels in natural language understanding tasks.
Large Language Model English
microsoft
1.6M
316
Indobert Lite Base P1
MIT
IndoBERT is a BERT model variant tailored for the Indonesian language, trained using masked language modeling and next sentence prediction objectives. The Lite version is a lightweight model suitable for resource-constrained environments.
Large Language Model
Transformers Other

indobenchmark
723
0
ZSD Microsoft V2xxlmnli
MIT
An enhanced BERT decoding model based on the disentangled attention mechanism; this large-scale version is fine-tuned on the MNLI task.
Large Language Model
Transformers English

NDugar
59
3
V3large 2epoch
MIT
DeBERTa is an enhanced BERT model based on the disentangled attention mechanism. Trained on 160GB of data with 1.5 billion parameters, it surpasses BERT and RoBERTa on multiple natural language understanding tasks.
Large Language Model
Transformers English

NDugar
31
0
1epochv3
MIT
DeBERTa is an enhanced BERT model based on the disentangled attention mechanism, surpassing BERT and RoBERTa in multiple natural language understanding tasks
Large Language Model
Transformers English

NDugar
28
0
Debertav3 Mnli Snli Anli
DeBERTa is an enhanced BERT decoding model based on the disentangled attention mechanism, which improves upon BERT and RoBERTa models and performs better in most natural language understanding tasks.
Large Language Model
Transformers English

NDugar
26
3
V3large 1epoch
MIT
DeBERTa is an enhanced BERT decoder model based on the disentangled attention mechanism, excelling in natural language understanding tasks.
Large Language Model
Transformers English

NDugar
32
0
Deberta Large Mnli Zero Cls
MIT
DeBERTa is an enhanced BERT decoding model based on the disentangled attention mechanism, surpassing BERT and RoBERTa in multiple natural language understanding tasks by improving the attention mechanism and masked decoder.
Large Language Model
Transformers English

Narsil
51.27k
14
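The zero-shot classification use implied by this entry follows a standard recipe over an MNLI-tuned model: pair the input text with one hypothesis per candidate label (e.g. "This example is about {label}."), keep each pair's entailment probability, and renormalize across labels. The logits below are toy stand-ins for the model's output, and the label order [contradiction, neutral, entailment] is an assumption about the classification head.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def zero_shot(nli_logits_per_label, labels):
    # For each candidate label, take the entailment probability of the
    # (text, hypothesis) pair -- index 2, assuming the head orders classes
    # [contradiction, neutral, entailment] -- then renormalize over labels.
    entail = np.array([softmax(logits)[2] for logits in nli_logits_per_label])
    probs = entail / entail.sum()
    return dict(zip(labels, probs))

# Toy NLI logits standing in for the model's output on each pair.
logits = [np.array([0.1, 0.2, 2.5]),   # hypothesis about "sports": strong entailment
          np.array([1.5, 0.5, 0.0]),   # "politics"
          np.array([0.5, 1.8, 0.3])]   # "cooking"
scores = zero_shot(logits, ["sports", "politics", "cooking"])
top = max(scores, key=scores.get)
```

This is why an MNLI fine-tune doubles as a zero-shot classifier: no label-specific training is needed, only one NLI forward pass per candidate label.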
V2xl Again Mnli
MIT
DeBERTa is an enhanced BERT decoding model that, through its disentangled attention mechanism and enhanced masked decoder, surpasses BERT and RoBERTa on multiple natural language understanding tasks.
Large Language Model
Transformers English

NDugar
30
0